METHOD FOR LEARNING A NEURAL NETWORK ON BOARD AN AIRCRAFT TO AID IN LANDING SAID AIRCRAFT, AND SERVER FOR IMPLEMENTING SUCH A METHOD
Patent abstract:
The method uses a fleet of aircraft each equipped with at least one radar sensor. It comprises at least: - a step of collective collection of radar images by a set of aircraft (A1, ..., AN) of said fleet, said radar images being obtained by the radar sensors of said aircraft (A1, ..., AN) during nominal landing phases on said runway, a step in which each image collected by an aircraft is labeled with at least one item of position information of said runway relative to said aircraft, said labeled image being sent (42) to a shared database (41) and stored in said database; - a step of learning said runway by a neural network (3) from the labeled images stored in said shared database (41), said neural network being trained at the end of said step; - a step of sending (43) said trained neural network to at least one of said aircraft (A1). Figure for the abstract: Fig. 4
Publication number: FR3089038A1
Application number: FR1871698
Filing date: 2018-11-22
Publication date: 2020-05-29
Inventors: Yoan VEYRAC; Patrick Garrec; Pascal Cornic
Applicant: Thales SA
Main IPC class:
Patent description:
Description

Title of the invention: METHOD FOR LEARNING A NEURAL NETWORK ON BOARD AN AIRCRAFT TO AID IN LANDING SAID AIRCRAFT, AND SERVER FOR IMPLEMENTING SUCH A METHOD

[0001] The present invention relates to a method for learning a neural network on board an aircraft to aid in the landing of said aircraft. The invention also relates to a server for implementing such a method.

The technical field of the invention is that of detecting and recognizing an environment relative to the position of an observer. The main field of operation is radar, for landing aid applications. The invention relates more specifically to "EVS" (Enhanced Vision System) landing aid systems, and could be applied to other sensors (for example optical or electro-optical).

The invention addresses in particular the problem of aiding the landing of aircraft on an airstrip in conditions of reduced visibility, in particular because of difficult weather conditions, for example fog. Standards impose visibility rules during the landing phase. These rules translate into decision thresholds referenced to the altitude of the aircraft during its descent phase. At each of these thresholds, identified visual cues must be acquired for the landing maneuver to continue; otherwise it must be abandoned.

Abandoned landing maneuvers are a real problem for air traffic management and flight planning. Before take-off, the ability to land at the destination must be estimated on the basis of more or less reliable weather forecasts and, if necessary, fall-back solutions must be provided. The problem of landing aircraft in conditions of reduced visibility has led to the development of several techniques currently in use. One of these is the Instrument Landing System (ILS).
The ILS system is based on radio-frequency equipment installed on the ground, at the runway, and a compatible instrument placed on board the aircraft. The use of such a guidance system requires expensive equipment and a specific qualification of the pilots. Furthermore, it cannot be installed at all airports, and it requires maintenance by aircraft used to calibrate the system. This system is not generalized and is being withdrawn from operation.

Another alternative is the GPS landing aid. Although it has sufficient accuracy, its reliability is too low, since it can easily be subjected to interference, intentionally or unintentionally. Its integrity is not guaranteed.

[0007] Finally, an augmented vision technique (Enhanced Vision System, EVS) is also used. The principle is to use sensors that perform better than the pilot's eye in degraded weather conditions, and to embed the collected information in the pilot's field of vision by means of a head-up display or the visor of a helmet worn by the pilot. This technique essentially relies on sensors detecting the radiation of the lamps placed along the runway and on the approach ramp. Incandescent lamps produce visible light, but they also emit in the infrared range. Sensors in the infrared range can detect this radiation, and in degraded weather conditions their detection range is better than that of the human eye in the visible range. The improved visibility therefore makes it possible to improve the approach phases to a certain extent and to limit abandoned approaches. However, this technique relies on the parasitic infrared radiation of the lamps present around the runway. For the sake of lamp durability, the current trend is to replace incandescent lamps with LED lamps, which have a less extended spectrum in the infrared range. A collateral effect is therefore to make EVS systems based on infrared sensors technically obsolete.
An alternative to infrared sensors is to obtain images with a radar sensor, in the centimeter or millimeter band. Certain frequency bands chosen outside the absorption peaks of water vapor have a very low sensitivity to harsh weather conditions. Such sensors therefore make it possible to produce an image through fog, for example. However, even if these sensors have a fine distance resolution, their angular resolution is much coarser than that of optical solutions. The resolution is directly related to the size of the antennas used, and it is often too coarse to position the landing runway accurately at a distance sufficient to perform the alignment maneuvers.

There is therefore a need for new technical solutions to guide the approach maneuver for a landing in conditions of reduced visibility. An object of the invention is in particular to allow such guidance in conditions of reduced visibility. To this end, the subject of the invention is a method for learning a neural network on board an aircraft to aid in the landing of said aircraft on at least one given runway, said method using a fleet of aircraft each fitted with at least one radar sensor and comprising at least:
- a step of collective collection of radar images by a set of aircraft of said fleet, said radar images being obtained by the radar sensors of said aircraft during nominal landing phases on said runway, a step in which each image collected by an aircraft is labeled with at least one item of position information of said runway relative to said aircraft, said labeled image being sent to a shared database and stored in said database;
- a step of learning said runway by a neural network from the labeled images stored in said shared database, said neural network being trained at the end of said step;
- a step of sending said trained neural network to at least one of said aircraft.
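For illustration only, the three steps above can be sketched as follows. All names are hypothetical, and the "training" here is a trivial stand-in (averaging the position labels) rather than an actual neural-network optimization:

```python
# Hypothetical sketch of the three steps of the method: collective
# collection of labeled radar images, per-runway training, and
# distribution of the trained network to aircraft.

shared_database = []   # step 1 target: the shared database
trained_networks = {}  # step 3 source: one trained model per runway

def collect(aircraft_id, runway_id, radar_image, runway_position):
    """Step 1: label a radar image from a nominal landing and store it."""
    labeled = {
        "aircraft": aircraft_id,
        "runway": runway_id,
        "image": radar_image,
        "runway_position": runway_position,  # runway pose relative to the carrier
    }
    shared_database.append(labeled)
    return labeled

def train(runway_id):
    """Step 2: train a network on all labeled images for one runway.
    'Training' here simply averages the labels, as a placeholder."""
    samples = [s for s in shared_database if s["runway"] == runway_id]
    mean_pos = sum(s["runway_position"] for s in samples) / len(samples)
    trained_networks[runway_id] = mean_pos  # placeholder for real weights
    return trained_networks[runway_id]

def send(runway_id):
    """Step 3: return the trained network for upload to an aircraft."""
    return trained_networks[runway_id]
```

The point of the sketch is only the data flow: many aircraft feed one shared base, one model per runway is trained from it, and the result is redistributed.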
In a particular implementation, said neural network transmits the trajectory of said aircraft to a display and/or control means. Said database comprises, for example, labeled radar images specific to several landing runways, the labeled images comprising an identification of the imaged runway. Each labeled image includes, for example, the identification of the aircraft having sent said image.

In a particular mode of implementation:
- the radar images being affected by a bias specific to the installation of said radar sensor on each aircraft, said bias is estimated for each radar image before its storage in said database, the estimated bias being stored with said image;
- the trained neural network is transmitted to a given aircraft with the estimated bias specific to that aircraft.

The estimation of said bias for a given aircraft and for a given runway is, for example, carried out by comparing at least one radar image obtained by the radar sensor equipping said aircraft with a reference image of said runway and its environment. Said reference image is, for example, constituted by a digital terrain model. The labeled images are, for example, transmitted between an aircraft and said database by means of the radar sensor fitted to said aircraft, the transmissions being carried out by modulating the data forming said images onto the radar wave.
In a particular embodiment, for an aircraft carrying said radar sensor, the labeling of said images includes at least one of the following indications:
- date of acquisition of the image relative to the moment of touchdown of said carrier on the runway;
- location of said carrier at the moment the image is taken:
* absolute: GPS position;
* relative to the runway: inertial unit;
- altitude of said carrier;
- attitude of said carrier;
- speed vector of said carrier (obtained by said radar sensor from its speed relative to the ground);
- acceleration vector of said carrier (obtained by said radar sensor from its speed relative to the ground);
- position, relative to said carrier, of the runway and of reference structures obtained by optical means with precise location.

Said database is, for example, updated over the course of the nominal landings made by said aircraft on at least said runway. The invention also relates to a server comprising a database for learning an on-board neural network for implementing the method described above, said server being able to communicate with aircraft. Said neural network is, for example, trained in said server, the trained network then being transmitted to at least one of said aircraft.

Other characteristics and advantages of the invention will become apparent from the description which follows, given with reference to the appended drawings, which represent:
[fig.1] Figure 1, an exemplary embodiment of a landing aid device used by the method according to the invention;
[fig.2] Figure 2, an illustration of an operational landing phase performed using the device of Figure 1;
[fig.3] Figure 3, a representation of a neural network allowing the positioning of a carrier relative to a given runway from a sequence of radar images;
[fig.4] Figure 4, an illustration of the principle of collective learning according to the invention;
[fig.5] Figure 5, a chain of exploitation of the training data and of restitution of the trained neural networks, in the implementation of the invention;
[fig.6] Figure 6, a representation of an example of an aircraft bias estimate.

To guide an aircraft to a runway, the invention advantageously combines a radar sensor, very insensitive to weather conditions, and a neural network, both on board the aircraft. This neural network shares a learning base of radar images with the neural networks of other aircraft, this base being updated collectively by these aircraft with a stream of radar images taken during landing phases. The use of the complete environment of an airport and of the landing runway allows precise positioning thanks to the on-board neural network, trained over several landings. We begin by describing the part of the landing aid system used to guide an aircraft; the description is made for joining a given runway.

FIG. 1 shows, in accordance with the invention, a device for assisting in the landing of an aircraft by detection and positioning of the landing runway with respect to this aircraft. It includes at least:
- a radar sensor 1 carried by said aircraft, whose function is in particular to obtain radar images of the landing runways;
- a functional block 2 for collecting on-board radar images, performing at least the labeling and storage of the radar images obtained by the sensor 1, the stored images then being transmitted to a database 10 shared with other aircraft, as will be described later;
- a functional block 3 comprising a neural network, embedded in the aircraft, the neural network being trained from the collection of radar images, that is to say from the radar images obtained during nominal landings (in clear weather), and whose function is to estimate the position of the airstrip relative to the carrier from the radar images obtained in real time by the radar sensor 1 (i.e.
obtained during the current landing), these images being stored in a database associated with the collection system 2;
- another functional block 4, also on board, carrying out the exploitation and formatting of the data coming from the neural network, making it possible to present these data through a suitable interface; this interface may allow the display of the runway or of symbols representing it, or even provide flight commands making it possible to rejoin the nominal landing trajectory.

The device comprises, for example, a system 5 for displaying the runway, visual cues and relevant navigation data integrated into the pilot's field of vision via a head-up display (HUD) system or a helmet; any other viewing system is possible. The collection block 2, the neural network 3 and the data processing block 4 are, for example, integrated into the flight computer of the aircraft.

Once labeled, the radar images are used by the neural network 3, as will be described later: they serve to train it. More specifically, the neural network learns the landing runway from all the images stored and labeled in the learning base. Once trained, the neural network 3 is able, from a series of radar images, to position the runway and its environment relative to the carrier, and more particularly to position the landing point. The neural network's input images are the images taken by the radar 1 during the current landing phase. It is then possible, for the functional block 4, to estimate and correct the deviation of the corresponding trajectory (the trajectory of the carrier in the current landing) relative to a nominal landing trajectory. It is also possible to display the runway in the pilot's field of vision using the display system 5.

The radar sensor 1 operates, for example, in the centimeter band or in the millimeter band; it makes it possible to position the carrier relative to a landing runway on which the latter wishes to land, regardless of the pilot's visibility conditions.
The radar images obtained can be direct radar images or SAR ("Synthetic Aperture Radar") images. The latter make it possible to refine the angular precision by taking advantage of the change of viewing angle due to the motion of the carrier. The radar images are obtained by the radar sensor 1 at each landing phase, which makes it possible to continuously enrich the radar image database. As noted previously, this radar data is acquired during nominal landing maneuvers, in clear weather, by day or by night. This acquisition is also made under the various possible conditions of aerology and maneuver (different types of wind, different crosswind arrivals, different approach slopes), the information on all these conditions being contained in the labeling of the images.

Once the landing is complete, the radar data obtained during the landing phase is recorded in the database and labeled as part of a nominal landing maneuver. The labeling includes at least this nominal landing information, but it can advantageously be extended with the following additional information, depending on availability on the carrier:
[0048] - date of acquisition of the image relative to the moment of touchdown of the carrier's wheels on the runway;
- location of the carrier at the moment the image is taken:
* absolute: GPS position;
* relative to the runway: inertial unit;
- altitude of the carrier;
- attitude of the carrier;
- speed vector of the carrier (obtained by the radar 1 from its speed relative to the ground);
- acceleration vector of the carrier (obtained by the radar 1 from its speed relative to the ground);
- position, relative to the carrier, of the runway and of reference structures obtained by optical means with precise location.

FIG. 2 illustrates the operational method implemented by the device illustrated in FIG. 1 for the landing aid of an aircraft. This method comprises the steps described below with reference to FIG. 2, for a given landing runway.
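As an aside, the labeling described above can be represented as a simple record. The field names below are hypothetical, and only the fields actually available on the carrier would be filled in:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ImageLabel:
    """Illustrative label attached to each radar image collected during
    a nominal landing; the fields mirror the list above, names are ours."""
    nominal_landing: bool                                       # always recorded
    time_to_touchdown_s: Optional[float] = None                 # acquisition date vs. touchdown
    gps_position: Optional[Tuple[float, float]] = None          # absolute location
    inertial_position: Optional[Tuple[float, float]] = None     # relative to the runway
    altitude_m: Optional[float] = None
    attitude_deg: Optional[Tuple[float, float, float]] = None   # roll, pitch, yaw
    speed_vector: Optional[Tuple[float, float, float]] = None   # from the radar, vs. ground
    accel_vector: Optional[Tuple[float, float, float]] = None   # from the radar, vs. ground
    optical_runway_position: Optional[Tuple[float, float]] = None

# A partially filled label: only the data available on this carrier is set.
label = ImageLabel(nominal_landing=True, time_to_touchdown_s=-42.0,
                   altitude_m=310.0)
```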
Two preliminary steps, not shown, relate to the collection of the images and the learning of the runway from the collected images. A first step 21 takes a first series of radar images, these images being direct radar images or SAR images obtained by the radar sensor 1 on board the aircraft. Each radar image is labeled consistently with the images already recorded in the collection system 2. In a second step 22, the situation of the carrier with respect to the runway and its environment is estimated by means of the neural network, from the series of radar images obtained. The series may consist of a single radar image, the estimation being feasible from a single image. In a third step 23, the estimate provided by the neural network 3 is exploited, this exploitation being carried out by the functional block 4, for example. The latter provides the formatted data for the display (performed by the display system 5) and for providing the flight controls that correct the trajectory. It also makes it possible to present, in an exploitable form, the confidence indicator calculated by the neural network. At the end of this third step (test 24), if the aircraft is not in the final landing phase (that is to say at the point of joining the landing runway), the process loops back to the first step 21, where a new series of radar images is obtained. Otherwise, if the aircraft is in the final landing phase, the final positioning of the aircraft relative to the runway is achieved on the final landing trajectory.

The landing method used by the invention is therefore based on learning the landing sequence and the associated radar images. This method requires learning data (labeled radar images) to function properly. In particular, the positioning accuracy depends on the number of training data available and kept up to date.
The method according to the invention uses several aircraft, each equipped with the same landing aid device as that described in relation to Figures 1 and 2. The radar sensor 1 makes it possible to take images of the runway environment and, from these images, to position the carrier (position, speed and altitude) relative to the runway by means of the neural network 3. This requires a prior learning step, during which the radar images obtained during nominal landing phases are labeled with the data available during the landing, as described above. These labeled images are used to train the neural network of the landing aid. Once trained, the neural network makes it possible, using images obtained during a landing in conditions of reduced visibility, to obtain the relative positioning of the carrier with respect to the runway. The operational operation has been described with reference to FIG. 2.

The accuracy of this positioning depends on the quality of the learning carried out, and in particular on the number of images available for this learning: the higher the number, the better the quality of the learning. According to the invention, the radar image database 10 is enriched collectively by several aircraft. More specifically, for a given landing runway, this base is enriched with the images obtained during the landing phases of several aircraft. The same database can contain images specific to several runways.

FIG. 3 represents a neural network allowing the positioning of the carrier relative to a runway P1 from a sequence of radar images. More particularly, FIG. 3 shows the inputs and outputs of this neural network 3 when it is used during a landing on this runway P1. This network takes as input radar images 31 and, optionally, data 32 from complementary sensors, such as GPS for example. From these data, the neural network establishes the positioning of the carrier relative to the runway. This positioning includes the attitude of the carrier.
It can be enriched with the speed and the estimated touchdown point. FIG. 4 more particularly illustrates the method according to the invention. The invention provides a collective learning method allowing each on-board landing aid device to have a solid, tested and updated reference base for at least one landing runway where the carrier may be required to land. This reference base can advantageously include the learning data of several runways.

The method according to the invention uses a fleet of N aircraft A1, ..., AN, each equipped with a landing aid device according to Figure 1. At each landing phase, each device sends 42 the landing data, including the labeled images, to a centralized server 41, for example located on the ground, this server containing the database 10. Together with the labeled images, the device sends an identifier of the aircraft A1, ..., AN and an identifier of the runway. This data is transmitted by means of a suitable communication system. This means of communication can be integrated into the radar sensor 1 of each on-board device, the data transmissions being carried out by modulation of the data onto the radar wave; in other words, the modulation of the transmitted radar wave codes the transmitted data.

The server uses this labeled data to train the neural networks respectively associated with the corresponding runways. The training (or learning) of the neural networks is done from the data stored in the server 41, the learning notably consisting in learning at least one landing trajectory on the identified runway. The server then sends 43 the trained neural networks to the various aircraft (where they form the functional block 3 in each landing aid device).
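The idea of coding the transmitted data onto the radar wave can be illustrated with a minimal phase-modulation sketch. BPSK is used here purely as an example; the patent does not specify a modulation scheme:

```python
import math

def bpsk_modulate(bits, samples_per_bit=8):
    """Encode data bits onto a carrier by binary phase-shift keying:
    bit 0 -> phase 0, bit 1 -> phase pi (an illustrative stand-in for
    modulating the data onto the radar wave)."""
    wave = []
    for bit in bits:
        phase = math.pi if bit else 0.0
        for n in range(samples_per_bit):
            wave.append(math.cos(2 * math.pi * n / samples_per_bit + phase))
    return wave

def bpsk_demodulate(wave, samples_per_bit=8):
    """Recover the bits by correlating each symbol with the reference
    carrier: a negative correlation means the phase was inverted (bit 1)."""
    bits = []
    for i in range(0, len(wave), samples_per_bit):
        corr = sum(wave[i + n] * math.cos(2 * math.pi * n / samples_per_bit)
                   for n in range(samples_per_bit))
        bits.append(1 if corr < 0 else 0)
    return bits
```

Any scheme with the same round-trip property (demodulation recovers the transmitted bits) would serve the purpose described in the text.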
More precisely, in this step of sending 43 a trained neural network to an aircraft, the neural network is transmitted to a means for controlling the trajectory of this aircraft, this means typically being the functional block 3 followed by the formatting and exploitation block 4, whose operations have been described previously. This control means allows the display of the trajectory for piloting aid, or directly allows the trajectory of the aircraft to be commanded and corrected.

Since the different radar sensors 1 may each have a bias, in particular relating to the mounting plane of the sensor when installed in the aircraft, provision is made according to the invention to compensate for these different biases. FIG. 5 illustrates the learning of the neural network corresponding to a runway P1, taking into account the bias linked to an aircraft A1. The training data coming from the aircraft A1 after landing on the runway P1 are sent back to the centralized server 41. The bias linked to the aircraft A1 is estimated by a processing means 51 installed in the server 41. This bias estimation 51 is carried out before the data enters the database 10 for learning the runway P1. This step makes it possible to normalize the data from the various aircraft and to make the learning of the neural network associated with this runway P1, implemented in a module 300, converge effectively. The trained and normalized network is then transmitted to the various aircraft, after application 52 of a corrective bias specific to each aircraft. With this neural network, each device has a landing aid function built on an extensive database which advantageously offers good accuracy thanks to the sharing of data.

FIG. 6 illustrates an example of estimation of the bias linked to each aircraft; other methods can be used. In the example of FIG. 6, the landing data (labeled radar images) sent by the aircraft A1 relating to the landings on the different runways used are aggregated in a memory 61.
These radar images are compared 62 with reference images, each of these reference images being specific to a runway and its environment. These images include, for example, buildings and infrastructure. The reference images are, for example, digital terrain models (DTM), and can be digital elevation models when they include buildings and infrastructure. This comparison 62 between the labeled radar images (containing in particular the position of the aircraft) and the reference images makes it possible, for each aircraft A1, ..., AN, to estimate 63 the bias between the image taken by the radar sensor and the point of view of the aircraft projected into the reference images, for example into the digital terrain model. The main bias is related to the mounting plane of the radar sensor, and it leads to a systematic angular error with respect to a normalized reference frame linked to the axes of the aircraft. Cross-referencing the data relating to several runways improves the estimation of this systematic error, which can then be finely corrected.

Advantageously, the invention makes it possible to produce a collective database for the learning of the neural networks. This provides a larger data set, which improves the quality of the learning, in order to achieve good accuracy in positioning the runways relative to the aircraft. In particular, an aircraft landing for the first time on a runway benefits from the collective experience, while taking into account the specific characteristics of its own sensor. Advantageously also, landing by means of a device according to the invention is particularly robust to localized variations in the environment, for example the presence of vehicles or seasonal vegetation, which pose problems for fixed algorithms. In addition, the invention adapts to lasting variations in the environment, such as new buildings or infrastructure, by integrating these elements into the learning base.
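As a minimal illustration of this normalization (the actual comparison 62 against a digital terrain model is of course richer), the systematic angular error of one aircraft's sensor can be estimated as the mean deviation between angles measured in its radar images and angles predicted from the reference images, then subtracted; the function names and values below are illustrative:

```python
def estimate_mounting_bias(measured_angles_deg, reference_angles_deg):
    """Estimate the systematic angular error due to the radar mounting
    plane as the mean deviation between angles measured in the radar
    images and angles predicted from the reference images (e.g. a DTM).
    Averaging over many landings and several runways suppresses the
    per-landing noise and keeps the systematic part."""
    deviations = [m - r for m, r in
                  zip(measured_angles_deg, reference_angles_deg)]
    return sum(deviations) / len(deviations)

def correct(angle_deg, bias_deg):
    """Apply the corrective bias (step 52) to a raw angular measurement."""
    return angle_deg - bias_deg
```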
Claims:
Claims

[Claim 1] Method for learning a neural network on board an aircraft (A1) to aid the landing of said aircraft on at least one given runway (P1), characterized in that, using a fleet of aircraft each equipped with at least one radar sensor (1), said method comprises at least:
- a step of collective collection of radar images by a set of aircraft (A1, ..., AN) of said fleet, said radar images being obtained by the radar sensors of said aircraft (A1, ..., AN) during nominal landing phases on said runway, a step in which each image collected by an aircraft is labeled with at least one item of position information of said runway (P1) with respect to said aircraft, said labeled image being sent (42) to a shared database (10, 41) and stored in said database;
- a step of learning said runway by a neural network (3) from the labeled images stored in said shared database (41, 10), said neural network (3) being trained at the end of said step;
- a step of sending (43) said trained neural network to at least one of said aircraft (A1).

[Claim 2] Method according to Claim 1, characterized in that said neural network transmits the trajectory of said aircraft to a display (5) and/or control means.

[Claim 3] Method according to any one of the preceding claims, characterized in that said database (41, 10) comprises labeled radar images specific to several landing runways, the labeled images comprising an identification of the imaged runway.

[Claim 4] Method according to any one of the preceding claims, characterized in that each labeled image includes the identification of the aircraft having sent said image.
[Claim 5] Method according to claim 4, characterized in that:
- the radar images being affected by a bias specific to the installation of said radar sensor (1) on each aircraft, said bias is estimated (51) for each radar image before its storage in said database (10), the estimated bias being stored with said image;
- the trained neural network is transmitted to a given aircraft with the estimated bias specific to that aircraft.

[Claim 6] Method according to any one of the preceding claims, characterized in that the estimation of said bias for a given aircraft (A1) and for a given runway is carried out by comparison between at least one radar image obtained by the radar sensor (1) equipping said aircraft and a reference image of said runway and its environment.

[Claim 7] Method according to claim 6, characterized in that said reference image is constituted by a digital terrain model.

[Claim 8] Method according to any one of the preceding claims, characterized in that said labeled images are transmitted between an aircraft and said database (41, 10) by means of the radar sensor (1) equipping said aircraft, the transmissions being carried out by modulating the data forming said images onto the radar wave.
[Claim 9] Method according to any one of the preceding claims, characterized in that, for an aircraft carrying said radar sensor (1), the labeling of said images comprises at least one of the following indications:
- date of acquisition of the image relative to the moment of touchdown of said carrier on the runway;
- location of said carrier at the moment the image is taken:
* absolute: GPS position;
* relative to the runway: inertial unit;
- altitude of said carrier;
- attitude of said carrier;
- speed vector of said carrier (obtained by said radar sensor from its speed relative to the ground);
- acceleration vector of said carrier (obtained by said radar sensor from its speed relative to the ground);
- position, relative to said carrier, of the runway and of reference structures obtained by optical means with precise location.

[Claim 10] Method according to any one of the preceding claims, characterized in that said database (41, 10) is updated over the course of the nominal landings carried out by said aircraft on at least said runway.

[Claim 11] Server characterized in that it includes a database (10) for learning an on-board neural network for implementing the method according to any one of the preceding claims, said server being able to communicate with aircraft (A1, ..., AN).

[Claim 12] Server according to claim 11, characterized in that said neural network is trained (300) in said server, the trained network being transmitted to at least one of said aircraft.
Family patents:
Publication number | Publication date
FR3089038B1 | 2020-10-30
CN111209927A | 2020-05-29
EP3657213B1 | 2022-03-09
EP3657213A1 | 2020-05-27
US20200168111A1 | 2020-05-28